DRAM-Aware Last-Level Cache Replacement
Authors
Abstract
The cost of last-level cache misses and evictions depends significantly on three major performance-related characteristics of DRAM-based main memory systems: bank-level parallelism, row buffer locality, and write-caused interference. Bank-level parallelism and row buffer locality introduce different latency costs for the processor to service misses: parallel or serial, fast or slow. Write-caused interference arises when writebacks of dirty cache lines delay the service of subsequent reads and other writes, making the cost of an eviction different for different cache lines. This paper makes a case for DRAM-aware last-level cache design. We show that making the last-level cache replacement policy aware of these major DRAM characteristics can significantly enhance overall system performance. Evicting cache lines so as to minimize the cost of misses and write-caused interference outperforms conventional DRAM-unaware replacement policies on both single-core and multi-core systems, by 11.4% and 12.3% respectively.
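The victim-selection idea described in the abstract can be illustrated with a small sketch. This is not the paper's exact algorithm; the cost constants (`ROW_HIT_COST`, `ROW_CONFLICT_COST`, `WRITEBACK_PENALTY`) and the line/row bookkeeping are illustrative assumptions. The sketch estimates a DRAM cost for evicting each replacement candidate, so that a clean line mapping to an open row is preferred over a dirty line that would cause a row conflict and write-caused interference.

```python
# Hypothetical sketch of DRAM-aware victim selection. The relative cost
# constants below are assumptions for illustration, not measured values.
ROW_HIT_COST = 1        # refetching from an open row is cheap
ROW_CONFLICT_COST = 3   # refetching after a row conflict is expensive
WRITEBACK_PENALTY = 2   # a dirty eviction can delay later reads/writes

def eviction_cost(line, open_rows):
    """Estimate the DRAM cost of evicting (and later refetching) a line.

    `line` is a dict with 'bank', 'row', and 'dirty' fields;
    `open_rows` maps each bank to its currently open row.
    """
    row_open = open_rows.get(line["bank"]) == line["row"]
    cost = ROW_HIT_COST if row_open else ROW_CONFLICT_COST
    if line["dirty"]:
        cost += WRITEBACK_PENALTY  # write-caused interference on eviction
    return cost

def select_victim(candidate_ways, open_rows):
    """Among the replacement candidates, evict the cheapest line for DRAM."""
    return min(candidate_ways, key=lambda l: eviction_cost(l, open_rows))
```

A DRAM-unaware policy such as plain LRU would ignore `open_rows` and the dirty bit entirely; the point of the paper is that folding these DRAM-side costs into the replacement decision recovers performance that LRU leaves on the table.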
Related resources
DRAM-Aware Last-Level Cache Writeback: Reducing Write-Caused Interference in Memory Systems
Read and write requests from a processor contend for the main memory data bus. System performance depends heavily on when read requests are serviced since they are required for an application’s forward progress whereas writes do not need to be performed immediately. However, writes eventually have to be written to memory because the storage required to buffer them on-chip is limited. In modern ...
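The read/write contention this excerpt describes is often managed with watermark-based write draining; the following is a minimal sketch under assumed parameters (`HIGH_WATERMARK`, `LOW_WATERMARK` are illustrative), not the cited paper's mechanism. Reads are serviced first, and buffered writes are drained in a burst once the write buffer fills past a high watermark, amortizing the read-to-write turnaround cost.

```python
# Minimal sketch of watermark-based write draining for a memory controller.
# The watermark values are assumptions chosen for illustration.
HIGH_WATERMARK = 6  # start draining writes when the buffer reaches this size
LOW_WATERMARK = 2   # stop draining once the buffer falls back to this size

def schedule(read_queue, write_buffer, draining):
    """Pick the next request type; returns ('read'|'write'|None, draining).

    `draining` is the controller's sticky drain state, carried across calls
    so writes are serviced in bursts rather than one at a time.
    """
    if len(write_buffer) >= HIGH_WATERMARK:
        draining = True
    elif len(write_buffer) <= LOW_WATERMARK:
        draining = False
    if draining and write_buffer:
        return "write", draining        # burst-drain buffered writes
    if read_queue:
        return "read", draining         # reads are latency-critical
    if write_buffer:
        return "write", draining        # opportunistic drain when idle
    return None, draining
```

The sticky `draining` flag is the key design choice: switching the data bus between reads and writes is expensive, so the controller drains many writes per switch instead of interleaving them with reads.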
DRAM Aware Last-Level-Cache Policies for Multi-core Systems
x latency DTC in two cycles. In contrast, state-of-the-art DRAM cache always reads the tags from DRAM cache that incurs high tag lookup latencies of up to 41 cycles. In summary, high DRAM cache hit latencies, increased inter-core interference, increased inter-core cache eviction, and the large application footprint of complex applications necessitates efficient policies in order to satisfy the ...
Haakon Dybdahl: Architectural Techniques to Improve Cache Utilization
The cache is a memory area where recently accessed data is stored for fast access. The size of the cache has grown rapidly during the last decades to compensate for the increasingly slower main memory. The state-of-the-art memory hierarchy has multiple levels of cache with the last-level being the largest and slowest. The area of the chip dedicated to the last-level cache is substantial, but th...
An Efficient Design and Implementation of Multi-level Cache for Database Systems
Flash-based solid state devices (SSDs) are making deep inroads into enterprise database applications due to their faster data access. The capacity and performance characteristics of SSDs make them well-suited for use as a second-level buffer cache. In this paper, we propose an SSD-based multilevel buffer scheme, called flash-aware second-level cache (FASC), where the SSD serves as an extension of the DRAM buf...
STT-RAM Aware Last-Level-Cache Policies for Simultaneous Energy and Performance Improvement
High capacity Last Level Cache (LLC) architectures have been proposed to mitigate the widening processor-memory speed gap. These LLC architectures have been realized using DRAM or Spin-Transfer Torque Random Access Memory (STT-RAM) memory technologies. It has been shown that STT-RAM LLC provides improved energy efficiency compared to DRAM LLC. However, existing STT-RAM LLC suffers from increased...